
Implement ONNX Pad Operator #2007

Status: Merged (4 commits into tracel-ai:main on Jul 23, 2024)

Conversation

@johnhuichen (Contributor) commented on Jul 11, 2024:

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

#1714

Changes

Implement ONNX Pad by calling the existing tensor.pad (see the sketch below).

  • Input tensor: currently only Float32 is supported; support will be extended to other float, int, and boolean types.
  • Pads: currently only positive pads are supported.
  • Constant value: the type is restricted to the input tensor's element type; currently only Float32 is supported.
  • mode: currently only "constant" is supported.
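
For illustration, here is a minimal, standalone sketch of the kind of call the generated Burn code boils down to, assuming Burn's existing tensor.pad takes a (left, right, top, bottom) padding tuple plus a fill value; the backend, shapes, and values below are made up for the example and are not taken from this PR:

use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    // A 2x2 f32 input padded by one element on every side with the constant 0.0,
    // mirroring an ONNX Pad node in "constant" mode with positive pads.
    let input = Tensor::<NdArray, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device);
    let padded = input.pad((1, 1, 1, 1), 0.0);
    assert_eq!(padded.dims(), [4, 4]);
}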

TODO:

  • Support int and boolean tensors.
  • Passing a negative constant value may currently cause an issue; this needs further investigation.

Testing

  1. Test that the ONNX model produces expected outputs for an f32 input tensor and positive pads before building the ONNX model
  2. Test that the pad codegen creates the expected Rust code
  3. Test that the generated pad Rust code produces expected outputs for an f32 input tensor and positive pads

@johnhuichen changed the title from "Create ONNX test model builder" to "Implement ONNX Pad Operator" on Jul 11, 2024
@johnhuichen marked this pull request as ready for review on July 11, 2024 at 17:13

codecov bot commented Jul 11, 2024

Codecov Report

Attention: Patch coverage is 99.37500% with 1 line in your changes missing coverage. Please review.

Project coverage is 86.45%. Comparing base (35345de) to head (706b48b).
Report is 18 commits behind head on main.

Files                                      Patch %   Lines
crates/burn-import/src/burn/node/pad.rs    98.50%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2007      +/-   ##
==========================================
+ Coverage   84.40%   86.45%   +2.04%     
==========================================
  Files         842      684     -158     
  Lines      105132    87413   -17719     
==========================================
- Hits        88736    75572   -13164     
+ Misses      16396    11841    -4555     


@antimora requested review from laggui and antimora on July 12, 2024 at 20:03

@antimora (Collaborator) left a comment:

Thank you for your addition!

Overall looks good. However, we need to tighten the pad configuration. We need to panic aggressively if the user passes more configuration than our current APIs support. It will be an opportunity to enhance Burn's pad function.

crates/burn-import/src/onnx/op_configuration.rs (outdated review thread, resolved)
crates/burn-import/src/onnx/op_configuration.rs (outdated review thread, resolved)
Comment on lines 774 to 778
let left = pads[input_dim - 1];
let top = pads[input_dim - 2];
let right = pads[pads.len() - 1];
let bottom = pads[pads.len() - 2];
vec![left, right, top, bottom]

Collaborator commented:

This assumes the user wants to pad the last two dimensions, which works for Burn's pad, but it's silent. We should break if the ONNX configuration does not fit Burn's API. Obviously the ONNX version is more generic, which would require Burn's code to be updated as well (not required for this PR).

Comment on lines 783 to 787
// TODO: support int, boolean
let constant_value: f32 = match &node.inputs[2].value {
Some(Data::Float32s(shape)) => shape.first().unwrap().to_owned(),
_ => 0.,
};

Collaborator commented:

We need to panic and not be silent. Also, there could be only one input (data), with attributes for the others; see OpSet 1.

@laggui (Member) left a comment:

Thanks for adding this implementation! I think overall it looks good.

I agree with the comments made by @antimora; we need to be more strict and clear about the unsupported configurations.

crates/burn-import/src/onnx/op_configuration.rs (outdated review thread, resolved)
@johnhuichen force-pushed the implement-onnx-pad branch 2 times, most recently from df2fee2 to 40f2a78 on July 15, 2024 at 14:29

@johnhuichen (Contributor, Author) commented:

@antimora @laggui

I have made the ONNX conversion stricter by requiring three inputs. I also added a warning that padding is applied only to the last two dimensions.

Can you both take another look?

@laggui (Member) left a comment:

That's better! I think there's still just one issue with the current way the padding is parsed: we should not simply warn when the operation might be incorrect.

We should make sure that the provided values will work for the Burn conversion; otherwise, panic.

Comment on lines 782 to 798
if input_dim > 2 {
log::warn!("Pad: padding will only be applied to the last two dimensions");
}

let left = pads[input_dim - 1];
let top = pads[input_dim - 2];
let right = pads[pads.len() - 1];
let bottom = pads[pads.len() - 2];
vec![left, right, top, bottom]

Member commented:

I think the correct conversion when parsing pads should check the pad order [x1_begin, x2_begin, ..., x1_end, x2_end, ...], and we should make sure that all dimensions other than the last two are not padded; otherwise the conversion is not valid (and thus, panic instead of warn).

Following the given spec example, a 4d input padded on the last two dimensions should have pads = [0, 0, x2_begin, x3_begin, 0, 0, x2_end, x3_end] (i.e., the begin and end pads for the first two dimensions are all zero).
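
A minimal, standalone sketch of the validation being described (function and variable names are illustrative, not the PR's actual code): accept only pads where every dimension other than the last two is unpadded, panic otherwise, and return Burn's (left, right, top, bottom):

fn onnx_pads_to_burn(pads: &[i64], input_dim: usize) -> [i64; 4] {
    assert_eq!(pads.len(), 2 * input_dim, "Pad: expected 2 * rank pad values");
    // Begin/end pad indices for the last two dimensions (height, width).
    let keep = [input_dim - 2, input_dim - 1, 2 * input_dim - 2, 2 * input_dim - 1];
    for (i, &p) in pads.iter().enumerate() {
        if !keep.contains(&i) && p != 0 {
            panic!("Pad: only padding of the last two dimensions is supported");
        }
    }
    let top = pads[input_dim - 2];
    let left = pads[input_dim - 1];
    let bottom = pads[2 * input_dim - 2];
    let right = pads[2 * input_dim - 1];
    [left, right, top, bottom]
}

fn main() {
    // The 4d example from the spec discussion: only H and W are padded, so
    // pads = [0, 0, x2_begin, x3_begin, 0, 0, x2_end, x3_end].
    assert_eq!(onnx_pads_to_burn(&[0, 0, 1, 2, 0, 0, 3, 4], 4), [2, 4, 1, 3]);
}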

Contributor Author commented:

OK, that's a good idea.

Contributor Author commented:

I have pushed the update.

Comment on lines 781 to 787

let left_index = input_dim - 1;
let top_index = input_dim - 2;
let right_index = pads.len() - 1;
let bottom_index = pads.len() - 2;
let index_list = [left_index, top_index, right_index, bottom_index];

Collaborator commented:

This seems to accept pads with pads.len() > 4, but the Burn API only accepts 4. Right?

Contributor Author commented:

If pads.len() > 4, this code checks whether the user wants to pad any dimension other than the last two:

yes -> panic
no -> the pad operation is doable, and it calculates left, top, right, bottom

@laggui (Member) commented on Jul 17, 2024:

The way the padding is specified for the ONNX Pad op is a bit different, since it supports padding more than just the last two dimensions. So we have to parse the provided pads to make sure they are valid, and then return the padding that is valid for Burn with only 4 values: (left, right, top, bottom).

I think the check here is valid.

For a 4d input where we should only accept padding the last two dimensions, it means that pads should be [0, 0, x2_begin, x3_begin, 0, 0, x2_end, x3_end]. For the last two dimensions expected to be [H, W], this means that x2_begin = top, x3_begin = left, x2_end = bottom and x3_end = right.

And the op config returns only the (left, right, top, bottom) values to generate the Burn code.

@laggui requested a review from antimora on July 22, 2024 at 12:39

@antimora (Collaborator) left a comment:

One minor fix is required. Please see my inline comment. We should be able to merge afterwards.

Comment on lines 751 to 753
if node.inputs.len() != 3 {
panic!("Pad: must provide three inputs")
}

Collaborator commented:

The spec says at least 2 inputs are required (so the check should be node.inputs.len() > 1).

The 3rd and 4th inputs are optional.

Comment on lines 802 to 805
match &node.inputs[2].value {
Some(Data::Float32s(shape)) => shape.first().unwrap().to_owned(),
_ => panic!("Pad: should provide a constant value input to pad with, for example 0.0"),
}

Collaborator commented:

The 3rd input is optional and should default to 0 if absent.
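
A standalone sketch of the suggested defaulting behavior, using simplified stand-ins for burn-import's input and value types (the real types in op_configuration.rs differ); the point is only that a missing third input falls back to 0.0 instead of panicking:

enum Data {
    Float32s(Vec<f32>),
}

struct Input {
    value: Option<Data>,
}

fn pad_constant_value(inputs: &[Input]) -> f32 {
    inputs
        .get(2)
        .and_then(|input| match &input.value {
            Some(Data::Float32s(values)) => values.first().copied(),
            _ => None,
        })
        .unwrap_or(0.0)
}

fn main() {
    // Only the two required inputs (data and pads) are present: default to 0.0.
    let two_inputs = vec![Input { value: None }, Input { value: None }];
    assert_eq!(pad_constant_value(&two_inputs), 0.0);

    // An explicit constant value is provided as the third input.
    let three_inputs = vec![
        Input { value: None },
        Input { value: None },
        Input { value: Some(Data::Float32s(vec![1.5])) },
    ];
    assert_eq!(pad_constant_value(&three_inputs), 1.5);
}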

pad now requires 2 or more arguments
if the third argument is not given, it will default to 0

@antimora (Collaborator) left a comment:

Looks good. Thanks for fixing. Minor wording update needed but I am approving in advance.

-    if node.inputs.len() != 3 {
-        panic!("Pad: must provide three inputs")
+    if node.inputs.len() < 2 {
+        panic!("Pad: must provide two inputs")

Collaborator commented:

I would add "at least".

"must provide two inputs" => "must provide at least two inputs"

Change the panic message about needing two inputs. This comes from the fact that the ONNX spec requires two inputs but can have two more optional arguments.
@mepatrick73 merged commit 4a3fc9d into tracel-ai:main on Jul 23, 2024
6 of 12 checks passed
@johnhuichen deleted the implement-onnx-pad branch on July 24, 2024 at 16:08